Local optima smoothing for global optimization
Abstract
It is widely believed that in order to solve large-scale global optimization problems an appropriate mixture of local approximation and global exploration is necessary. Local approximation, if first-order information on the objective function is available, is efficiently performed by means of local optimization methods. Unfortunately, global exploration, in the absence of some kind of global inform...
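As a concrete illustration of the local/global mixture described in this abstract, here is a minimal multistart sketch in Python: random restarts provide the global exploration, while a gradient-based local solver provides the local approximation. The Rastrigin test function, the uniform restart strategy, and the L-BFGS-B solver are assumptions made for the example; this is not the smoothing method proposed in the paper.

```python
# Minimal multistart sketch: random global exploration + gradient-based local descent.
# Illustrative only; not the paper's local-optima-smoothing method.
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    # Standard multimodal test function with many local optima (an assumption for this demo).
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def multistart(f, dim, bounds, n_starts=50, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    best_x, best_f = None, np.inf
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi, size=dim)           # global exploration: random start point
        res = minimize(f, x0, method="L-BFGS-B",     # local approximation: gradient-based solver
                       bounds=[(lo, hi)] * dim)
        if res.fun < best_f:
            best_x, best_f = res.x, res.fun
    return best_x, best_f

x_star, f_star = multistart(rastrigin, dim=5, bounds=(-5.12, 5.12))
print(f_star)
```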
Similar resources

A Heuristic Smoothing Procedure for Avoiding Local Optima in Optimization of Structures Subject to Unilateral Constraints
Structural optimization problems are often solved by gradient-based optimization algorithms, e.g. sequential quadratic programming or the method of moving asymptotes. If the structure is subject to unilateral constraints, then the gradient may be non-existent for some designs. It follows that difficulties may arise when such structures are to be optimized using gradient-based optimization algor...
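A toy numerical sketch of the non-differentiability issue mentioned in this abstract: a unilateral (contact-type) penalty max(0, ·) has different one-sided slopes at the design where the constraint first activates, so no gradient exists there. The displacement model and penalty weight below are invented for the illustration.

```python
# Toy illustration (not from the paper): a unilateral contact penalty breaks differentiability.
import numpy as np

gap = 1.0
def objective(d):
    u = 2.0 * d                            # hypothetical displacement response of design d
    return d + 5.0 * max(0.0, u - gap)     # cost plus unilateral contact penalty

d0, h = 0.5, 1e-6                          # at d0 = 0.5 the contact just closes (u = gap)
fwd = (objective(d0 + h) - objective(d0)) / h
bwd = (objective(d0) - objective(d0 - h)) / h
print(fwd, bwd)                            # one-sided slopes differ, so no gradient exists at d0
```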
Virtuous smoothing for global optimization
In the context of global optimization and mixed-integer nonlinear programming, generalizing a technique of D’Ambrosio, Fampa, Lee and Vigerske for handling the square-root function, we develop a virtuous smoothing method, using cubics, aimed at functions having some limited nonsmoothness. Our results pertain to root functions (w^p with 0 < p < 1) and their increasing concave relatives. We provid...
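For intuition, the following sketch builds a cubic that smooths sqrt(w) near w = 0 by matching the value and first two derivatives of the square root at an assumed breakpoint delta. It is only an illustrative construction in the spirit of this abstract, not necessarily the authors' "virtuous" smoothing.

```python
# Illustrative cubic smoothing of sqrt(w) near the origin; the breakpoint and
# matching conditions are assumptions for the example, not the paper's construction.
import numpy as np

delta = 1e-2  # smooth sqrt on [0, delta], keep the true sqrt for w >= delta

# Cubic q(w) = a*w + b*w**2 + c*w**3 (so q(0) = 0), matching sqrt's value and
# first two derivatives at the breakpoint delta.
A = np.array([
    [delta, delta**2,  delta**3],
    [1.0,   2 * delta, 3 * delta**2],
    [0.0,   2.0,       6 * delta],
])
rhs = np.array([
    delta**0.5,            # sqrt(delta)
    0.5 * delta**-0.5,     # first derivative of sqrt at delta
    -0.25 * delta**-1.5,   # second derivative of sqrt at delta
])
a, b, c = np.linalg.solve(A, rhs)

def smoothed_sqrt(w):
    # Smooth polynomial near the origin, exact square root elsewhere.
    return a * w + b * w**2 + c * w**3 if w < delta else np.sqrt(w)

print(smoothed_sqrt(0.0), smoothed_sqrt(delta), np.sqrt(delta))
```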
On local optima in multiobjective combinatorial optimization problems
In this article, local optimality in multiobjective combinatorial optimization is used as a baseline for the design and analysis of two iterative improvement algorithms. Both algorithms search in a neighborhood that is defined on a collection of sets of feasible solutions and their acceptance criterion is based on outperformance relations. Proofs of the soundness and completeness of these algor...
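The description above matches the general shape of a Pareto local search; the sketch below assumes bit-string solutions, a one-bit-flip neighborhood, and Pareto dominance as the outperformance-based acceptance rule. It is a generic illustration, not the specific algorithms analyzed in the article.

```python
# Generic Pareto local search sketch (assumed setting: bit strings, flip neighborhood,
# dominance-based acceptance); not the algorithms studied in the article.
import random

def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_local_search(evaluate, n_bits, seed=0):
    random.seed(seed)
    start = tuple(random.randint(0, 1) for _ in range(n_bits))
    archive = {start: evaluate(start)}       # set of mutually non-dominated solutions
    unexplored = [start]
    while unexplored:
        current = unexplored.pop()
        for i in range(n_bits):              # flip-one-bit neighborhood
            neighbor = current[:i] + (1 - current[i],) + current[i + 1:]
            obj = evaluate(neighbor)
            if any(dominates(v, obj) for v in archive.values()):
                continue                     # rejected: an archived solution outperforms it
            archive = {s: v for s, v in archive.items() if not dominates(obj, v)}
            if neighbor not in archive:
                archive[neighbor] = obj
                unexplored.append(neighbor)
    return archive

# Toy bi-objective: minimize (number of ones, number of zeros) -- a trade-off by design.
result = pareto_local_search(lambda s: (sum(s), len(s) - sum(s)), n_bits=6)
print(len(result))
```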
Porcupine Neural Networks: (Almost) All Local Optima are Global
Neural networks have been used prominently in several machine learning and statistics applications. In general, the underlying optimization of neural networks is non-convex, which makes their performance analysis challenging. In this paper, we take a novel approach to this problem by asking whether one can constrain neural network weights so that the optimization landscape has good theoretical ...
Journal
Journal title: Optimization Methods and Software
Year: 2005
ISSN: 1055-6788, 1029-4937
DOI: 10.1080/10556780500140029